The Download: AI to detect child abuse images, and what to expect from our 2025 Climate Tech Companies to Watch list
Plus: OpenAI's parental controls have come into force.

Generative AI has caused the production of child sexual abuse images to skyrocket. Now the leading investigator of child exploitation in the US is experimenting with using AI to distinguish AI-generated images from material depicting real victims, according to a new government filing. The Department of Homeland Security's Cyber Crimes Center, which investigates child exploitation across international borders, has awarded a $150,000 contract to San Francisco-based Hive AI for its software, which can identify whether a piece of content was AI-generated.

The need to cut emissions and adapt to our warming world is growing more urgent. This year, we've seen temperatures reach record highs, as they have nearly every year for the last decade. Climate-fueled natural disasters are affecting communities around the world, costing billions of dollars.
- North America > United States > California > San Francisco County > San Francisco (0.25)
- Asia > China (0.07)
- Asia > South Korea (0.06)
- North America > United States > Massachusetts (0.05)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.97)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.58)
US investigators are using AI to detect child abuse images made by AI
Though artificial intelligence is fueling a surge in synthetic child abuse images, it's also being tested as a way to stop harm to real victims. Generative AI has caused the production of child sexual abuse images to skyrocket. Now the leading investigator of child exploitation in the US is experimenting with using AI to distinguish AI-generated images from material depicting real victims, according to a new government filing. The Department of Homeland Security's Cyber Crimes Center, which investigates child exploitation across international borders, has awarded a $150,000 contract to San Francisco-based Hive AI for its software, which can identify whether a piece of content was AI-generated. The filing, posted on September 19, is heavily redacted, and Hive cofounder and CEO Kevin Guo said he could not discuss the details of the contract, but confirmed that it involves use of the company's AI detection algorithms for child sexual abuse material (CSAM). The filing cites data from the National Center for Missing and Exploited Children, which reported a 1,325% increase in incidents involving generative AI in 2024.
- North America > United States > California > San Francisco County > San Francisco (0.25)
- North America > United States > Massachusetts (0.05)
- North America > United States > Illinois > Cook County > Chicago (0.05)
AI tool detects child abuse images with 99% accuracy
A new AI-powered tool claims to detect child abuse images with around 99 percent accuracy. The tool, called Safer, was developed by the non-profit Thorn to assist businesses that do not have in-house filtering systems in detecting and removing such images. According to the UK's Internet Watch Foundation, reports of child abuse images surged 50 percent during the COVID-19 lockdown. In the 11 weeks starting on 23rd March, its hotline logged 44,809 reports of images, compared with 29,698 over the same period the previous year. Many of these images came from children who had spent more time online and been coerced into sharing images of themselves.
- Europe > United Kingdom (0.26)
- North America > United States > California (0.06)
- Europe > Netherlands > North Holland > Amsterdam (0.06)
Artificial intelligence will detect child abuse images
A pilot scheme will see machine learning taught how to grade the severity of the disturbing photos and footage, saving detectives from the distressing task. If successful, the trial could go into full service "within two to three years", according to the force behind its development. The approach is not without its drawbacks, including the legal ramifications of uploading such sensitive information. Police are granted legal permission from the courts to store criminal images. This protection would not apply to any cloud storage service providers.
- Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.73)
- Information Technology > Security & Privacy (0.57)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (0.43)
Artificial intelligence will detect child abuse images to save police from trauma
Artificial intelligence will take on the gruelling task of scanning for images of child abuse on suspects' phones and computers so that police officers are no longer subjected to psychological trauma within "two to three years". The Metropolitan Police's digital forensics department, which last year trawled through 53,000 different devices for incriminating evidence, already uses image recognition software but it is not sophisticated enough to spot indecent images and video, Mark Stokes, the Met's head of digital and electronics forensics, told the Telegraph. "We have to grade indecent images for different sentencing, and that has to be done by human beings right now, but machine learning takes that away from humans," he said. "You can imagine that doing that for year-on-year is very disturbing." The force is currently drawing up an ambitious plan to move its sensitive data to cloud providers such as Amazon Web Services, Google or Microsoft, Mr Stokes said.